22 research outputs found

    An experiment in Interactive Retrieval for the lifelog moment retrieval task at imageCLEFlifelog2020.

    The development of technology has led to an increase in the use of mobile devices to keep track of individuals' daily activities, also known as lifelogging. Lifelogging has raised many research challenges, one of which is how to retrieve a specific moment in response to a user's information need. This paper presents an efficient interactive search engine for large multimodal lifelog data, evaluated in the ImageCLEFlifelog2020 Lifelog Moment Retrieval task (LMRT). The system is a modified version of the Myscéal demonstrator used in the Lifelog Search Challenge 2020, with the addition of visual similarity and a new method of visualising results. In interactive experimentation, our system achieved an F1@10 score of 0.48 in the official submission, which can be significantly improved by implementing a number of post-processing steps.

    Myscéal: an experimental interactive lifelog retrieval system for LSC'20

    The Lifelog Search Challenge (LSC) is an annual comparative benchmarking activity for comparing approaches to interactive retrieval from multi-modal lifelogs. Being an interactive search challenge, issues such as retrieval accuracy, search speed, and usability of interfaces are key challenges that must be addressed by every participant. In this paper, we introduce Myscéal, an interactive lifelog retrieval engine designed to support novice users in retrieving items of interest from a large multimodal lifelog. Additionally, we introduce a new similarity measure called “aTFIDF” to match a user's free-text information need with the multimodal lifelog index.
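    The abstract does not define aTFIDF itself, but plain TF-IDF ranking over a text index, which aTFIDF presumably augments, can be sketched as follows (document contents and identifiers are illustrative, not from the Myscéal index):

```python
import math
from collections import Counter

# Minimal TF-IDF ranking sketch. The exact aTFIDF augmentation is not
# specified in this abstract, so standard TF-IDF is shown for illustration.
docs = {
    "moment_001": "driving car morning coffee",
    "moment_002": "walking park dog sunny",
    "moment_003": "coffee shop laptop working",
}

N = len(docs)
tokenized = {d: text.split() for d, text in docs.items()}
df = Counter()  # document frequency of each term
for toks in tokenized.values():
    for term in set(toks):
        df[term] += 1

def tfidf_score(query, doc_id):
    """Sum of tf * idf over the query terms for one document."""
    toks = tokenized[doc_id]
    tf = Counter(toks)
    score = 0.0
    for term in query.split():
        if term in tf:
            idf = math.log(N / df[term])
            score += (tf[term] / len(toks)) * idf
    return score

ranked = sorted(docs, key=lambda d: tfidf_score("coffee working", d), reverse=True)
print(ranked[0])  # moment_003 matches both query terms
```

    A free-text information need is thus reduced to a ranked list of lifelog moments; aTFIDF would replace or extend the per-term weighting above.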

    Myscéal 2.0: a revised experimental interactive lifelog retrieval system for LSC'21

    Building an interactive retrieval system for lifelogging presents many challenges, due to the massive multi-modal personal data involved as well as the requirement for accuracy and rapid response in such a tool. The Lifelog Search Challenge (LSC) is an international lifelog retrieval competition that inspires researchers to develop systems to cope with these challenges and evaluates the effectiveness of their solutions. In this paper, we upgrade our previous Myscéal and present the Myscéal 2.0 system for LSC'21, with improved features inspired by experiments with novice users. The experiments show that a novice user achieved more than half of the expert score on average. To narrow this gap, some potential enhancements were identified and integrated into the enhanced version.

    VRLE: Lifelog Interaction Prototype in Virtual Reality: Lifelog Search Challenge at ACM ICMR 2020

    The Lifelog Search Challenge (LSC) invites researchers to share their prototypes for interactive lifelog retrieval and encourages competition to develop and evaluate effective methodologies to achieve this. With this paper we present a novel approach to visual lifelog exploration based on our research to date, utilising virtual reality as a medium for interactive information retrieval. The VRLE prototype presented is an iteration on a previous system which won the first LSC competition at ACM ICMR 2018.

    E-Myscéal: embedding-based Interactive lifelog retrieval system for LSC'22

    Developing interactive lifelog retrieval systems is a growing research area. There are many international competitions for lifelog retrieval that encourage researchers to build effective systems that can address the multimodal retrieval challenge of lifelogs. The Lifelog Search Challenge (LSC) was first organised in 2018 and is currently the only interactive benchmarking evaluation for lifelog retrieval systems. Participating systems should have an accurate search engine and a user-friendly interface that can help users to retrieve relevant content. In this paper, we upgrade our previous MyScéal, the top-performing system in LSC'20 and LSC'21, and present E-MyScéal for LSC'22, which includes a completely different search engine. Instead of using visual concepts for retrieval as MyScéal does, the new E-MyScéal employs an embedding technique that facilitates novice users who are not familiar with the concepts. Our experiments show that the new search engine can find relevant images in the first position of the ranked list for a quarter of the LSC'21 queries (26%) using just the first hint from the textual information need. Regarding the user interface, we keep the simple non-faceted design of the previous version but improve the event view browsing in order to better support novice users.
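    The core of such an embedding-based search engine is nearest-neighbour ranking in a shared text–image vector space. A minimal sketch, with random unit vectors standing in for the (unspecified) text and image encoders:

```python
import numpy as np

# Hedged sketch of embedding-based retrieval: the actual E-MyScéal
# embedding model is not specified in the abstract, so random vectors
# stand in for real text/image encoder outputs.
rng = np.random.default_rng(0)
image_ids = ["img_a", "img_b", "img_c"]
image_embs = rng.normal(size=(3, 512))
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)  # unit-normalise

def retrieve(query_emb, top_k=2):
    """Rank images by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    sims = image_embs @ q          # dot product of unit vectors = cosine
    order = np.argsort(-sims)      # descending similarity
    return [image_ids[i] for i in order[:top_k]]

# A query embedded near img_b's vector should retrieve img_b first.
query = image_embs[1] + rng.normal(scale=0.01, size=512)
print(retrieve(query))  # img_b ranked first
```

    Because matching happens in the vector space rather than over a fixed concept vocabulary, a novice's free-text query needs no knowledge of the indexing concepts.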

    DCU team at the NTCIR-15 micro-activity retrieval task

    The growing attention to lifelogging research has led to the creation of many retrieval systems, most of which employ event segmentation as core functionality. While previous literature focused on splitting lifelog data into broad segments of daily living activities, less attention was paid to micro-activities, which last for short periods of time yet carry valuable information for building a high-precision retrieval engine. In this paper, we present our efforts in addressing the NTCIR-15 MART challenge, in which the participants were asked to retrieve micro-activities from a multi-modal dataset. We proposed five models which investigate imagery and sensory data, both jointly and separately, using various Deep Learning and Machine Learning techniques, and achieved a maximum mAP score of 0.901 using an Image Tabular Pair-wise Similarity model, ranking second overall in the competition. Our model not only captures the information coming from the temporal visual data combined with the sensor signal, but also works as a Siamese network to discriminate micro-activities.
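    The Siamese idea, stripped of the authors' actual architecture, is a shared encoder applied to both inputs, with the distance between the two embeddings deciding whether they belong to the same micro-activity. A toy sketch (the linear encoder and threshold are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Siamese-style pairwise similarity sketch: one shared encoder maps both
# samples into the same space; small embedding distance -> same class.
# The encoder here is a toy random linear map, not the authors' network.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))  # shared projection used for BOTH inputs

def encode(x):
    return np.tanh(W @ x)

def same_activity(x1, x2, threshold=0.5):
    """Euclidean distance in embedding space; below threshold -> same micro-activity."""
    return np.linalg.norm(encode(x1) - encode(x2)) < threshold

x = rng.normal(size=8)
print(same_activity(x, x + 1e-3 * rng.normal(size=8)))  # True for near-identical inputs
```

    Training such a model with a contrastive or pair-wise loss pulls same-activity pairs together and pushes different ones apart, which is what lets it discriminate short micro-activities.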

    A VR interface for browsing visual spaces at VBS2021

    The Video Browser Showdown (VBS) is an annual competition in which each participant prepares an interactive video retrieval system and partakes in a live comparative evaluation at the annual MMM Conference. In this paper, we introduce Eolas, a prototype video/image retrieval system incorporating a novel virtual reality (VR) interface. For VBS'21, Eolas represented each keyframe of the collection by an embedded feature in a latent vector space, into which a query would also be projected to facilitate retrieval within a VR environment. A user could then explore the space and perform one of a number of filter operations to traverse the space and locate the correct result.

    An Exploration into the Benefits of the CLIP model for Lifelog Retrieval

    In this paper, we attempt to fine-tune the CLIP (Contrastive Language-Image Pre-Training) model on the Lifelog Question Answering dataset (LLQA) to investigate the retrieval performance of the fine-tuned model over the zero-shot baseline model. We train the model adopting a weight-space ensembling approach, using a modified loss function to take into account the differences between our dataset (LLQA) and the dataset the CLIP model was originally pretrained on. We further evaluate our fine-tuned model using visual as well as multimodal queries on multiple retrieval tasks, demonstrating improved performance over the zero-shot baseline model.
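    Weight-space ensembling typically means linearly interpolating, tensor by tensor, between the zero-shot and fine-tuned parameters (as in WiSE-FT); the mixing coefficient below is an assumed illustration, and the toy arrays stand in for real CLIP weight tensors:

```python
import numpy as np

# Weight-space ensembling sketch: theta = (1 - alpha) * theta_zero_shot
# + alpha * theta_fine_tuned, applied per parameter tensor. Toy 1-D
# arrays replace real CLIP weight dicts for illustration.
zero_shot = {"proj": np.array([1.0, 0.0]), "bias": np.array([0.5])}
fine_tuned = {"proj": np.array([0.0, 1.0]), "bias": np.array([1.5])}

def weight_space_ensemble(theta_zs, theta_ft, alpha=0.5):
    """Interpolate every parameter tensor between the two checkpoints."""
    return {k: (1 - alpha) * theta_zs[k] + alpha * theta_ft[k] for k in theta_zs}

ensembled = weight_space_ensemble(zero_shot, fine_tuned, alpha=0.5)
print(ensembled["proj"])  # [0.5 0.5]
```

    Interpolating in weight space lets the model keep some of the zero-shot robustness while gaining accuracy on the fine-tuning distribution, which matches the motivation stated in the abstract.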

    Interactive video retrieval evaluation at a distance: comparing sixteen interactive video search systems in a remote setting at the 10th Video Browser Showdown

    The Video Browser Showdown addresses difficult video search challenges through an annual interactive evaluation campaign attracting research teams focusing on interactive video retrieval. The campaign aims to provide insights into the performance of participating interactive video retrieval systems, tested by selected search tasks on large video collections. For the first time in its ten-year history, the Video Browser Showdown 2021 was organized in a fully remote setting and hosted a record number of sixteen scoring systems. In this paper, we describe the competition setting, tasks, and results, and give an overview of state-of-the-art methods used by the competing systems. By looking at query result logs provided by ten systems, we analyze differences in retrieval model performances and browsing times before a correct submission. Through advances in data gathering methodology and tools, we provide a comprehensive analysis of ad-hoc video search tasks and discuss results, task design, and methodological challenges. We highlight that almost all top performing systems utilize some sort of joint embedding for text-image retrieval and enable specification of temporal context in queries for known-item search. While a combination of these techniques drives the currently top performing systems, we identify several future challenges for interactive video search engines and the Video Browser Showdown competition itself.

    MemoriEase: an interactive lifelog retrieval system for LSC’23

    Lifelogging is the activity of recording all events that happen in the daily life of an individual. These events can contain images, audio, health indices, etc., which are collected through various devices such as wearable cameras, smartwatches, and other digital services. Exploiting lifelog data can bring significant benefits to lifeloggers, from creating personalized healthcare plans to retrieving events from the past. In recent years, there has been a growing development of interactive lifelog retrieval systems, such as the competitors at the annual Lifelog Search Challenge (LSC), to assist lifeloggers in finding events from the past. This paper introduces an interactive lifelog image retrieval system called MemoriEase for the LSC'23 challenge. The system combines concept-based and embedding-based retrieval approaches to return accurate images for LSC'23 queries. It uses BLIP for the embedding-based retrieval approach to reduce the semantic gap between images and text queries. The concept-based retrieval approach uses full-text search in Elasticsearch to retrieve images whose visual concepts are similar to keywords in the query. Regarding the user interface, we make it as simple as possible so that novice users can use it with little effort. This is the first version of MemoriEase and we expect it to help users perform well in the LSC'23 competition.
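    The abstract does not say how the concept-based (Elasticsearch) and embedding-based (BLIP) result lists are merged; reciprocal rank fusion (RRF) is one common way to combine two rankings and is shown here purely as an illustration, with made-up image identifiers:

```python
# Reciprocal rank fusion: score(d) = sum over lists of 1 / (k + rank(d)).
# This is a standard fusion heuristic, NOT necessarily MemoriEase's method.
def rrf(ranked_lists, k=60):
    """Merge several ranked lists into one, rewarding high ranks in any list."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

concept_hits = ["img_3", "img_1", "img_7"]    # e.g. Elasticsearch full-text results
embedding_hits = ["img_1", "img_5", "img_3"]  # e.g. BLIP embedding results
print(rrf([concept_hits, embedding_hits]))    # img_1 and img_3 rise to the top
```

    Images appearing near the top of both lists dominate the fused ranking, which is the behaviour a combined concept-plus-embedding engine wants.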